The execution of processes leaves traces of event data in information systems. These event data can be analyzed using process mining techniques. For traditional process mining techniques, each event must be associated with exactly one object, e.g., the company's customer. Events related to the same object form a sequence of events called a case. A case describes one end-to-end run through the process. The cases contained in event data can be used to discover process models, detect frequent bottlenecks, or learn predictive models. However, events encountered in real-life information systems, e.g., ERP systems, can often be associated with multiple objects. The traditional sequential case concept falls short for such object-centric event data, since these data exhibit a graph structure. One may force object-centric event data into the traditional case concept by flattening it; however, flattening manipulates the data and removes information. Therefore, a concept analogous to the case concept of traditional event logs is necessary to enable the application of different process mining tasks to object-centric event data. In this paper, we introduce the case concept for object-centric process mining: process executions. These are graph-based generalizations of cases as considered in traditional process mining. Furthermore, we provide techniques to extract process executions. Based on these executions, we determine equivalent process behavior with respect to an attribute using graph isomorphism. Process executions that are equivalent with respect to the events' activities are object-centric variants, i.e., a generalization of variants from traditional process mining. We provide a visualization technique for object-centric variants. The scalability and efficiency of the contributions are extensively evaluated. Furthermore, we provide a case study showing the most frequent object-centric variants of a real-life event log.
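The activity-based equivalence described above can be sketched with a toy isomorphism check (all event data below is invented, and the paper's extraction and equivalence machinery is richer than this brute-force comparison): two executions belong to the same object-centric variant when their event graphs are isomorphic with matching activity labels.

```python
from itertools import permutations

def same_variant(g1, g2):
    """Check whether two process executions belong to the same object-centric
    variant: their event graphs must be isomorphic with matching activity
    labels. Brute force over node mappings; fine for small executions."""
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    ids1, ids2 = list(nodes1), list(nodes2)
    for perm in permutations(ids2):
        m = dict(zip(ids1, perm))
        if all(nodes1[i] == nodes2[m[i]] for i in ids1) and \
           all((m[u], m[v]) in edges2 for u, v in edges1):
            return True
    return False

# An execution is (event id -> activity, set of directed edges between events).
exec1 = ({1: "create order", 2: "pick item", 3: "ship"}, {(1, 2), (2, 3)})
exec2 = ({7: "create order", 8: "pick item", 9: "ship"}, {(7, 8), (8, 9)})
exec3 = ({4: "create order", 5: "cancel order"}, {(4, 5)})

print(same_variant(exec1, exec2))  # True: same variant, different event ids
print(same_variant(exec1, exec3))  # False: different behavior
```

Event identifiers differ between the first two executions, but the variant notion abstracts them away, exactly as traditional variants abstract away case identifiers.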
The crucial role played by the underlying symmetries of high energy physics and lattice field theories calls for the implementation of such symmetries in the neural network architectures that are applied to the physical system under consideration. In these proceedings, we focus on the consequences of incorporating translational equivariance among the network properties, particularly in terms of performance and generalization. The benefits of equivariant networks are exemplified by studying a complex scalar field theory, on which various regression and classification tasks are examined. For a meaningful comparison, promising equivariant and non-equivariant architectures are identified by means of a systematic search. The results indicate that in most of the tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
In recent years, the use of machine learning has become increasingly popular in the context of lattice field theories. An essential element of such theories is represented by symmetries, whose inclusion in the neural network properties can lead to high rewards in terms of performance and generalizability. A fundamental symmetry that usually characterizes physical systems on a lattice with periodic boundary conditions is equivariance under spatial translations. Here we investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones. The system we consider is a complex scalar field with quartic interaction on a two-dimensional lattice in the flux representation, on which the networks carry out various regression and classification tasks. Promising equivariant and non-equivariant architectures are identified with a systematic search. We demonstrate that in most of these tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
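The translational equivariance being exploited can be demonstrated in a few lines (a minimal sketch, assuming a scalar field on a periodic 2D lattice; the random field and filter are stand-ins, not the architectures from the papers above): a convolution with periodic boundary conditions commutes exactly with lattice translations.

```python
import numpy as np

def periodic_conv(field, kernel):
    """2D cross-correlation with periodic boundary conditions, matching a
    lattice with periodic boundaries: out(x) = sum_d kernel(d) field(x + d)."""
    r = kernel.shape[0] // 2
    out = np.zeros_like(field)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out += kernel[dx + r, dy + r] * np.roll(field, (-dx, -dy), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8))   # scalar field configuration (random stand-in)
w = rng.normal(size=(3, 3))     # filter (random stand-in for learned weights)

shift = (2, 5)
lhs = periodic_conv(np.roll(phi, shift, axis=(0, 1)), w)  # translate, then convolve
rhs = np.roll(periodic_conv(phi, w), shift, axis=(0, 1))  # convolve, then translate
print(np.allclose(lhs, rhs))  # True: the layer is translation-equivariant
```

Because the two orders of operations agree exactly, a network built from such layers does not need to relearn the physics at every lattice site, which is the intuition behind the performance and generalization gains reported above.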
In these proceedings, we present lattice gauge equivariant convolutional neural networks (L-CNNs), which are able to process data from lattice gauge theory simulations while exactly preserving gauge symmetry. We review aspects of the architecture and show how L-CNNs can represent a large class of gauge invariant and gauge equivariant functions on the lattice. We compare the performance of L-CNNs and non-equivariant networks using a non-linear regression problem and demonstrate how gauge invariance is broken for non-equivariant models.
We review a novel neural network architecture called lattice gauge equivariant convolutional neural networks (L-CNNs), which can be applied to generic machine learning problems in lattice gauge theory while exactly preserving gauge symmetry. We discuss the concept of gauge equivariance, which is used to explicitly construct gauge equivariant convolutional layers and bilinear layers. The performance of L-CNNs and non-equivariant CNNs is compared using seemingly simple non-linear regression tasks, where L-CNNs demonstrate generalizability and achieve a high degree of accuracy in their predictions compared to their non-equivariant counterparts.
We propose lattice gauge equivariant convolutional neural networks (L-CNNs) for generic machine learning applications on lattice gauge theoretical problems. At the heart of this network structure is a novel convolutional layer that preserves gauge equivariance while forming arbitrarily shaped Wilson loops in subsequent bilinear layers. Together with topological information, e.g. from Polyakov loops, such a network can in principle approximate any gauge covariant function on the lattice. We demonstrate that L-CNNs can learn and generalize gauge invariant quantities that traditional convolutional neural networks are incapable of finding.
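The gauge symmetry that L-CNNs preserve can be illustrated with the simplest Wilson loop, the 1x1 plaquette. This sketch (a toy random SU(2) link configuration, not the L-CNN code itself) checks numerically that the plaquette trace is unchanged by an arbitrary gauge transformation:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) matrix built from a normalized quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

L = 4
# U[mu, x, y] is the link leaving site (x, y) in direction mu (0 = x, 1 = y)
U = np.array([[[random_su2() for _ in range(L)] for _ in range(L)] for _ in range(2)])

def plaquette(U, x, y):
    """Trace of the 1x1 Wilson loop at site (x, y), periodic boundaries."""
    xp, yp = (x + 1) % L, (y + 1) % L
    P = U[0, x, y] @ U[1, xp, y] @ U[0, x, yp].conj().T @ U[1, x, y].conj().T
    return np.trace(P)

# Gauge transformation Omega(x): U_mu(x) -> Omega(x) U_mu(x) Omega(x + mu)^dagger
Omega = np.array([[random_su2() for _ in range(L)] for _ in range(L)])
V = np.empty_like(U)
for x in range(L):
    for y in range(L):
        V[0, x, y] = Omega[x, y] @ U[0, x, y] @ Omega[(x + 1) % L, y].conj().T
        V[1, x, y] = Omega[x, y] @ U[1, x, y] @ Omega[x, (y + 1) % L].conj().T

print(np.allclose(plaquette(U, 2, 3), plaquette(V, 2, 3)))  # True: gauge invariant
```

The Omega factors cancel pairwise around the closed loop, leaving the trace untouched; larger Wilson loops, as formed by the bilinear layers, inherit the same cancellation.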
The performance of inertial navigation systems is largely dependent on the stable flow of external measurements and information to guarantee continuous filter updates and bind the inertial solution drift. Platforms in different operational environments may be prevented at some point from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described by the notable works that implemented its concept, use cases, relevant state updates, and their corresponding measurement models. By matching the appropriate constraint to a given scenario, one will be able to improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
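A toy illustration of information aiding (all models and noise values below are invented, and real implementations are far richer): a zero-velocity pseudo-measurement, applicable whenever the platform is known to be stationary, bounds the drift of a 1-D inertial channel in a Kalman filter, whereas the unaided solution drifts linearly.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt],    # velocity integrates the (biased) acceleration
              [0.0, 1.0]])  # accelerometer bias modeled as a random walk
Q = np.diag([1e-6, 1e-8])   # process noise (assumed values)
H = np.array([[1.0, 0.0]])  # the zero-velocity update observes velocity directly
R = np.array([[1e-4]])      # pseudo-measurement noise (assumed value)

x = np.array([0.0, 0.05])   # initial estimate: zero velocity, 0.05 bias guess
P = np.eye(2)

for _ in range(1000):
    # Predict: propagate the state estimate and covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: platform is stationary, so the velocity "measurement" is zero
    z = np.array([0.0])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

# Unaided comparison: dead-reckoning just integrates the bias
v_dead = 0.0
for _ in range(1000):
    v_dead += dt * 0.05

print(abs(x[0]))  # aided velocity stays near zero
print(v_dead)     # unaided velocity has drifted to about 0.5
```

The pseudo-measurement also makes the bias state observable through its coupling to velocity, which is the "uncovering internal states" effect the survey refers to.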
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
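The freezing idea can be sketched on a toy problem (hypothetical transition and reward tables; the paper's upper-level MDP, function approximation, and regret analysis are not reproduced here): with the slow state frozen, each lower-level problem is a small MDP over the fast state alone, solvable by plain value iteration.

```python
import numpy as np

n_f, n_s, n_a = 2, 2, 2   # fast states, slow states, actions
gamma = 0.9

rng = np.random.default_rng(2)
# Fast-state transition kernel P_fast[a, f, f'] and rewards R[s, f, a]
P_fast = rng.dirichlet(np.ones(n_f), size=(n_a, n_f))
R = rng.uniform(size=(n_s, n_f, n_a))

def frozen_value(s, sweeps=200):
    """Lower-level MDP: value iteration over the fast state only, with the
    slow state frozen at s (so R[s] and P_fast never change mid-solve)."""
    V = np.zeros(n_f)
    for _ in range(sweeps):
        Q = R[s].T + gamma * (P_fast @ V)  # Q[a, f]
        V = Q.max(axis=0)
    return V

# One small value function, and greedy fast-state policy, per frozen slow state
V_frozen = {s: frozen_value(s) for s in range(n_s)}
policy = {s: (R[s].T + gamma * (P_fast @ V_frozen[s])).argmax(axis=0)
          for s in range(n_s)}
print(policy)
```

Each lower-level solve touches only the fast state space, which is what keeps the overall scheme a fraction of the cost of solving the full joint MDP.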
In the present work we propose an unsupervised ensemble method consisting of oblique trees that can address the task of auto-encoding, namely Oblique Forest AutoEncoders (briefly OF-AE). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits consisting of multivariate linear combinations of features instead of axis-parallel ones, we devise an auto-encoder method through the computation of a sparse solution to a set of linear inequalities consisting of feature-value constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
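The difference between the two split types can be sketched as follows (toy data, not the OF-AE encoding itself): an axis-parallel split thresholds one feature, while an oblique split thresholds a linear combination of features and can therefore separate data that no single-feature threshold can.

```python
import numpy as np

def axis_parallel_split(x, feature, threshold):
    """Classic decision-tree test on a single coordinate."""
    return x[feature] <= threshold

def oblique_split(x, w, b):
    """Oblique test: which side of the hyperplane w . x + b = 0 we are on."""
    return np.dot(w, x) + b <= 0

# Points on the line y = x cannot be separated from points on y = x + 1 by
# thresholding x or y alone, but the oblique direction w = (1, -1) does it.
on_line = [np.array([t, t]) for t in (-2.0, 0.0, 3.0)]
above   = [np.array([t, t + 1.0]) for t in (-2.0, 0.0, 3.0)]
w, b = np.array([1.0, -1.0]), 0.5

print([oblique_split(p, w, b) for p in on_line])  # [False, False, False]
print([oblique_split(p, w, b) for p in above])    # [True, True, True]
```

In the auto-encoder setting, each leaf is then described by the conjunction of such inequalities along its root-to-leaf path, which is the constraint set whose sparse solution the decoding step computes.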
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task ``features'' -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and through them their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
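The contrastive-learning parallel drawn in the abstract can be made concrete with a minimal sketch (hypothetical embeddings and margin, not the paper's model): pairs labeled similar are pulled together in embedding space, while dissimilar pairs are pushed apart by at least a margin.

```python
import numpy as np

def contrastive_loss(z1, z2, similar, margin=1.0):
    """Classic pairwise contrastive loss on two embeddings. The 'similar'
    label would come from the user's similarity query, not augmentations."""
    d = np.linalg.norm(z1 - z2)
    if similar:
        return d ** 2                     # pull similar behaviors together
    return max(0.0, margin - d) ** 2      # push dissimilar ones apart

za = np.array([0.1, 0.2])
zb = np.array([0.15, 0.18])   # embedding close to za
zc = np.array([2.0, -1.0])    # embedding far from za

print(contrastive_loss(za, zb, similar=True))   # small: similar pair, close
print(contrastive_loss(za, zc, similar=False))  # 0.0: already past the margin
```

The substitution the paper makes is in where the labels come from: rather than a designer's augmentation heuristics defining "similar", the users' own similarity judgments supply them, so the learned embedding reflects the features people actually use.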